Existential Risk
AI Consciousness and Existential Risk
In AI, existential risk denotes the hypothetical threat posed by an artificial system that would possess both the capability and the objective, whether directly or indirectly, to eradicate humanity. This issue is gaining prominence in scientific debate due to recent technical advances and increased media coverage. In parallel, AI progress has sparked speculation and studies about the potential emergence of artificial consciousness. The two questions, AI consciousness and existential risk, are sometimes conflated, as if the former entailed the latter. Here, I explain that this view stems from a common confusion between consciousness and intelligence. Yet these two properties are empirically and theoretically distinct. Arguably, while intelligence is a direct predictor of an AI system's existential threat, consciousness is not. There are, however, certain incidental scenarios in which consciousness could influence existential risk, in either direction. Consciousness could be viewed as a means towards AI alignment, thereby lowering existential risk; or it could be a precondition for reaching certain capabilities or levels of intelligence, and thus positively related to existential risk. Recognizing these distinctions can help AI safety researchers and public policymakers focus on the most pressing issues.
What are the odds? Risk and uncertainty about AI existential risk
This work is a commentary on the article \href{https://doi.org/10.18716/ojs/phai/2025.2801}{AI Survival Stories: A Taxonomic Analysis of AI Existential Risk} by Cappelen, Goldstein, and Hawthorne. It is not merely a commentary, however, but also a useful reminder of the philosophical limitations of "linear" models of risk. This article focuses on the model employed by the authors: first, I discuss some differences between standard Swiss Cheese models and this one. I then argue that, in a situation of epistemic indifference, P(D) is higher than one might first expect, given the structural relationships between layers. I then distinguish between risk and uncertainty, and argue that any estimate of P(D) is structurally affected by two kinds of uncertainty: option uncertainty and state-space uncertainty. Incorporating these dimensions of uncertainty into our qualitative discussion of AI existential risk can provide a better understanding of P(D).
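As a hypothetical illustration of the layered-risk reasoning above (my own sketch, not the authors' exact formalism): in a Swiss-Cheese-style model where disaster D occurs only if every safety layer fails, P(D) under independence is the product of the per-layer failure probabilities, and epistemic indifference over each layer gives a concrete baseline.

```python
def p_disaster_independent(layer_failure_probs):
    """P(D) assuming the safety layers fail independently:
    disaster requires every layer to fail, so probabilities multiply."""
    p = 1.0
    for q in layer_failure_probs:
        p *= q
    return p

# Epistemic indifference: with no information, assign each of three
# hypothetical layers a 0.5 chance of failing.
indifferent = [0.5, 0.5, 0.5]
print(p_disaster_independent(indifferent))  # 0.125

# If the layers are structurally related, their failures are positively
# correlated and P(D) exceeds the independent product; under perfect
# correlation P(D) collapses to the weakest single layer.
print(min(indifferent))  # 0.5
```

The gap between 0.125 and 0.5 is the point of the structural-relationship argument: independence is the most optimistic case, not a neutral default.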
AI Safety Should Prioritize the Future of Work
Hazra, Sanchaita, Majumder, Bodhisattwa Prasad, Chakrabarty, Tuhin
Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. In this position paper, we identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. Additionally, the closed-source approach of major stakeholders in AI development resembles rent-seeking behavior through exploiting resources, breeding mediocrity in creative labor, and monopolizing innovation. To address this, we argue in favor of a robust international copyright anatomy supported by implementing collective licensing that ensures fair compensation mechanisms for using data to train AI models. We strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice while reducing technical debt.
Mitigating Societal Cognitive Overload in the Age of AI: Challenges and Directions
Societal cognitive overload, driven by the deluge of information and complexity in the AI age, poses a critical challenge to human well-being and societal resilience. This paper argues that mitigating cognitive overload is not only essential for improving present-day life but also a crucial prerequisite for navigating the potential risks of advanced AI, including existential threats. We examine how AI exacerbates cognitive overload through various mechanisms, including information proliferation, algorithmic manipulation, automation anxieties, deregulation, and the erosion of meaning. The paper reframes the AI safety debate to center on cognitive overload, highlighting its role as a bridge between near-term harms and long-term risks. It concludes by discussing potential institutional adaptations, research directions, and policy considerations that arise from adopting an overload-resilient perspective on human-AI alignment, suggesting pathways for future exploration rather than prescribing definitive solutions.

We stand at a precipice. Human societies are increasingly struggling to process the sheer volume and complexity of information in the digital age, a condition dramatically amplified by the rapid proliferation of artificial intelligence (AI). While Toffler (1970) foresaw "future shock" from accelerating change, and Eppler & Mengis (2004) and Bawden & Robinson (2009) analyzed individual information overload, Byung-Chul Han, in his critique of neoliberalism and technological domination (Han, 2017), argues that contemporary society faces a regime that exploits and overwhelms the psyche. This exploitation, now dramatically amplified by AI-driven information and complexity, elevates information overload to a systemic crisis: societal cognitive overload.
When Autonomy Breaks: The Hidden Existential Risk of AI
AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity's extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care or even leadership. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, while these skills remain innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.
The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI
Growiec, Jakub, Prettner, Klaus
Recent advances in artificial intelligence (AI) have led to a diverse set of predictions about its long-term impact on humanity. A central focus is the potential emergence of transformative AI (TAI), eventually capable of outperforming humans in all economically valuable tasks and fully automating labor. Discussed scenarios range from human extinction after a misaligned TAI takes over ("AI doom") to unprecedented economic growth and abundance ("post-scarcity"). However, the probabilities and implications of these scenarios remain highly uncertain. Here, we organize the various scenarios and evaluate their associated existential risks and economic outcomes in terms of aggregate welfare. Our analysis shows that even low-probability catastrophic outcomes justify large investments in AI safety and alignment research. We find that the optimizing representative individual would rationally allocate substantial resources to mitigate extinction risk; in some cases, she would prefer not to develop TAI at all. This result highlights that current global efforts in AI safety and alignment research are vastly insufficient relative to the scale and urgency of existential risks posed by TAI. Our findings therefore underscore the need for stronger safeguards to balance the potential economic benefits of TAI with the prevention of irreversible harm. Addressing these risks is crucial for steering technological progress toward sustainable human prosperity.
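The welfare argument above can be made concrete with back-of-the-envelope expected-welfare arithmetic. The numbers below are hypothetical illustrations of the logic, not the paper's calibration: when aggregate welfare is at stake, even a modest reduction in extinction probability can outweigh a substantial welfare cost of safety spending.

```python
def expected_welfare(p_doom, w_survive, w_doom=0.0):
    """Expected aggregate welfare given an extinction probability p_doom,
    welfare w_survive if humanity survives, and w_doom (default 0) if not."""
    return (1 - p_doom) * w_survive + p_doom * w_doom

# Hypothetical numbers: large stakes, small extinction probability.
baseline = expected_welfare(p_doom=0.05, w_survive=1000.0)     # ~950
# Spending 10 welfare units on safety research halves extinction risk:
with_safety = expected_welfare(p_doom=0.025, w_survive=990.0)  # ~965
print(with_safety > baseline)  # True: the safety investment pays off
```

The same arithmetic run with a sufficiently high p_doom or a low enough welfare ceiling can flip the comparison, which is how the representative individual in some scenarios rationally prefers not to develop TAI at all.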
Position: AI agents should be regulated based on autonomous action sequences
This position paper argues that AI agents should be regulated based on the sequence of actions they autonomously take. AI agents with long-term planning and strategic capabilities can pose significant risks of human extinction and irreversible global catastrophes. While existing regulations often focus on computational scale as a proxy for potential harm, we contend that such measures are insufficient for assessing the risks posed by AI agents whose capabilities arise primarily from inference-time computation. To support our position, we discuss relevant regulations and recommendations from AI scientists regarding existential risks, as well as the advantages of action sequences over existing impact measures that require observing environmental states.
Who's Driving? Game Theoretic Path Risk of AGI Development
Who controls the development of Artificial General Intelligence (AGI) might matter less than how we handle the fight for control itself. We formalize this "steering wheel problem": humanity's greatest near-term existential risk may stem not from misaligned AGI, but from the dynamics of competing to develop it. Just as a car crash can occur from passengers fighting over the wheel before reaching any destination, catastrophic outcomes could arise from development competition long before AGI exists. While technical alignment research focuses on ensuring safe arrival, we show how coordination failures during development could drive us off the cliff first. We present a game theoretic framework modeling AGI development dynamics and prove conditions for sustainable cooperative equilibria. Drawing from nuclear control while accounting for AGI's unique characteristics, we propose concrete mechanisms including pre-registration, shared technical infrastructure, and automated deterrence to stabilize cooperation. Our key insight is that AGI creates network effects in safety: shared investments become more valuable as participation grows, enabling mechanism designs where cooperation dominates defection. This work bridges formal methodology and policy frameworks, providing foundations for practical governance of AGI competition risks.
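The cooperative-equilibrium and network-effect claims can be sketched with a textbook repeated-game condition (my own illustration, not the paper's formal model): under grim-trigger strategies, mutual cooperation is an equilibrium when the discount factor delta satisfies delta >= (T - R) / (T - P), where T is the one-shot temptation payoff, R the mutual-cooperation payoff, and P the mutual-defection payoff. If shared safety investments make R grow with the number of participants, the required delta falls.

```python
def cooperation_sustainable(delta, T, R, P):
    """Grim trigger in an infinitely repeated game: cooperating forever
    (R each round) beats deviating once for T then getting P forever
    exactly when delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

def reward(n, base=3.0, network=0.5):
    """Hypothetical network effect: mutual-cooperation payoff grows
    linearly with the number of participants n beyond two."""
    return base + network * (n - 2)

T, P = 5.0, 1.0  # illustrative temptation and mutual-defection payoffs
for n in (2, 4, 6):
    R = reward(n)
    threshold = (T - R) / (T - P)
    print(n, threshold)  # the required discount factor shrinks with n
```

With these illustrative payoffs the threshold falls from 0.5 at two participants to 0.0 at six, capturing the claim that wider participation makes cooperation dominate defection.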